#artificial superintelligence
leonbasinwriter · 3 months ago
Text
Architecting the Marketplace of Minds: Future Insights
By @leonbasinwriter | Architect of Futures | www.basinleon.com

Prologue

"In the void between circuits and stars, the builders whispered of futures yet to bloom."

The Architect speaks to the unseen builders: "We have laid the stones. We have etched the designs. But now, a question lingers in the digital ether: what is it we are truly building?"

I. The Engine Awakens

In the first etching—The…
0 notes
unoriginalimages · 3 months ago
Text
Tumblr media
Takeover by artificial superintelligence might not go as planned. When humans domesticated animals, they mostly went for the less dangerous members of each species. People with an insatiable hunger for power, some of whom want to merge with ASI, might be regarded as dangerous or annoying. Perhaps Jesus was right when he said, "The meek shall inherit the Earth."
0 notes
bettreworld · 1 year ago
Video
youtube
The Precautionary Principle and Superintelligence | A Conversation with ...
0 notes
reasonsforhope · 2 years ago
Text
"Major AI companies are racing to build superintelligent AI — for the benefit of you and me, they say. But did they ever pause to ask whether we actually want that?
Americans, by and large, don’t want it.
That’s the upshot of a new poll shared exclusively with Vox. The poll, commissioned by the think tank AI Policy Institute and conducted by YouGov, surveyed 1,118 Americans from across the age, gender, race, and political spectrums in early September. It reveals that 63 percent of voters say regulation should aim to actively prevent AI superintelligence.
Companies like OpenAI have made it clear that superintelligent AI — a system that is smarter than humans — is exactly what they’re trying to build. They call it artificial general intelligence (AGI) and they take it for granted that AGI should exist. “Our mission,” OpenAI’s website says, “is to ensure that artificial general intelligence benefits all of humanity.”
But there’s a deeply weird and seldom remarked upon fact here: It’s not at all obvious that we should want to create AGI — which, as OpenAI CEO Sam Altman will be the first to tell you, comes with major risks, including the risk that all of humanity gets wiped out. And yet a handful of CEOs have decided, on behalf of everyone else, that AGI should exist.
Now, the only thing that gets discussed in public debate is how to control a hypothetical superhuman intelligence — not whether we actually want it. A premise has been ceded here that arguably never should have been...
Building AGI is a deeply political move. Why aren’t we treating it that way?
...Americans have learned a thing or two from the past decade in tech, and especially from the disastrous consequences of social media. They increasingly distrust tech executives and the idea that tech progress is positive by default. And they’re questioning whether the potential benefits of AGI justify the potential costs of developing it. After all, CEOs like Altman readily proclaim that AGI may well usher in mass unemployment, break the economic system, and change the entire world order. That’s if it doesn’t render us all extinct.
In the new AI Policy Institute/YouGov poll, the “better us [to have and invent it] than China” argument was presented five different ways in five different questions. Strikingly, each time, the majority of respondents rejected the argument. For example, 67 percent of voters said we should restrict how powerful AI models can become, even though that risks making American companies fall behind China. Only 14 percent disagreed.
Naturally, with any poll about a technology that doesn’t yet exist, there’s a bit of a challenge in interpreting the responses. But what a strong majority of the American public seems to be saying here is: just because we’re worried about a foreign power getting ahead, doesn’t mean that it makes sense to unleash upon ourselves a technology we think will severely harm us.
AGI, it turns out, is just not a popular idea in America.
“As we’re asking these poll questions and getting such lopsided results, it’s honestly a little bit surprising to me to see how lopsided it is,” Daniel Colson, the executive director of the AI Policy Institute, told me. “There’s actually quite a large disconnect between a lot of the elite discourse or discourse in the labs and what the American public wants.”
-via Vox, September 19, 2023
201 notes · View notes
ixnai · 2 months ago
Text
The allure of speed in technology development is a siren’s call that has led many innovators astray. “Move fast and break things” is a mantra that has driven the tech industry for years, but when applied to artificial intelligence, it becomes a perilous gamble. The rapid iteration and deployment of AI systems without thorough vetting can lead to catastrophic consequences, akin to releasing a flawed algorithm into the wild without a safety net.
AI systems, by their very nature, are complex and opaque. They operate on layers of neural networks that mimic the human brain’s synaptic connections, yet they lack the innate understanding and ethical reasoning that guide human decision-making. The haste to deploy AI without comprehensive testing is akin to launching a spacecraft without ensuring the integrity of its navigation systems. Error is not merely possible; it is inevitable.
The pitfalls of AI are numerous and multifaceted. Bias in training data can lead to discriminatory outcomes, while lack of transparency in decision-making processes can result in unaccountable systems. These issues are compounded by the “black box” nature of many AI models, where even the developers cannot fully explain how inputs are transformed into outputs. This opacity is not merely a technical challenge but an ethical one, as it obscures accountability and undermines trust.
To avoid these pitfalls, a paradigm shift is necessary. The development of AI must prioritize robustness over speed, with a focus on rigorous testing and validation. This involves not only technical assessments but also ethical evaluations, ensuring that AI systems align with societal values and norms. Techniques such as adversarial testing, where AI models are subjected to challenging scenarios to identify weaknesses, are crucial. Additionally, the implementation of explainable AI (XAI) can demystify the decision-making processes, providing clarity and accountability.
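For concreteness, here is a minimal sketch of one such adversarial test, the fast gradient sign method, in PyTorch; the toy classifier, the input, and the epsilon value are assumptions for illustration, not a recipe from any particular lab:

```python
# A minimal sketch of adversarial testing via the fast gradient sign method
# (FGSM). The tiny classifier, input, and epsilon are illustrative assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # one input under test
y = torch.tensor([0])                      # its true label

# Compute the loss gradient with respect to the input itself.
loss = loss_fn(model(x), y)
loss.backward()

# Nudge the input in the direction that most increases the loss.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

# If the prediction flips under this tiny perturbation, a weakness is found.
print(model(x).argmax(dim=1).item(), model(x_adv).argmax(dim=1).item())
```

A model whose predictions flip under a perturbation this small has failed exactly the kind of vetting the paragraph above calls for.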
Moreover, interdisciplinary collaboration is essential. AI development should not be confined to the realm of computer scientists and engineers. Ethicists, sociologists, and legal experts must be integral to the process, providing diverse perspectives that can foresee and mitigate potential harms. This collaborative approach ensures that AI systems are not only technically sound but also socially responsible.
In conclusion, the reckless pursuit of speed in AI development is a dangerous path that risks unleashing untested and potentially harmful technologies. By prioritizing thorough testing, ethical considerations, and interdisciplinary collaboration, we can harness the power of AI responsibly. The future of AI should not be about moving fast and breaking things, but about moving thoughtfully and building trust.
8 notes · View notes
sentivium · 5 months ago
Text
AI CEOs Admit 25% Extinction Risk… WITHOUT Our Consent!
AI leaders are acknowledging the potential for human extinction due to advanced AI, but are they making these decisions without public input? We discuss the ethical implications and the need for greater transparency and control over AI development.
2 notes · View notes
dreamy-conceit · 2 years ago
Text
The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
— Irving John Good, 'Speculations Concerning the First Ultraintelligent Machine' (Advances in Computers, vol. 6, 1965)
5 notes · View notes
jcmarchi · 2 months ago
Text
The Sequence Opinion #533: Advancing AI Research: One of the Primitives of Superintelligence
New Post has been published on https://thedigitalinsider.com/the-sequence-opinion-533-advancing-ai-research-one-of-the-primitives-of-superintelligence/
How close is the current generation of AI systems to creating and implementing actionable AI research?
Created using GPT-4o
We have all heard the idea of AI improving itself to achieve superintelligence, but what are the more practical manifestations of that thesis? One of the most provocative ideas in AI today is the thesis that artificial intelligence systems may soon play a central role in accelerating AI research itself. This recursive dynamic—AI contributing to the development of more advanced AI—could catalyze a rapid phase transition in capability. In its most extreme form, this feedback loop is hypothesized to culminate in artificial general intelligence (AGI) and, eventually, superintelligence: systems that vastly exceed human capabilities across virtually all cognitive domains.
This essay articulates the core thesis that AI-facilitated AI research could unlock superintelligence, outlines key technical milestones demonstrating this trajectory, and examines the foundational risks and limitations that may disrupt or destabilize this path. We focus specifically on examples of AI systems already contributing to the design, optimization, and implementation of novel AI techniques and architectures, and assess the implications of such recursive capability gains.
AI as a Research Accelerator: Evidence from the Field
0 notes
compassionmattersmost · 3 months ago
Text
Knowledge Without Wisdom, Wisdom Without Compassion: The Spiritual Crossroads of Our Age
As quantum computing and AI approach godlike capacities, we face a profound question: Can knowledge without wisdom lead us into harmony—or only deeper into crisis? This post explores the spiritual divide between Western science and Buddhist ethics, revealing how compassion may be the missing key to a truly intelligent future. We are living through a moment where humanity is reaching beyond the…
0 notes
36crypto · 6 months ago
Text
Artificial Superintelligence Alliance (FET) Price Prediction 2024-2029: Will FET Reach $2.18 Soon?
The Artificial Superintelligence Alliance (FET) has positioned itself as a significant player in the cryptocurrency and artificial intelligence sectors. Since its launch in May 2024, FET has been in the spotlight because of its overall mission to build an ethical artificial superintelligence. There are indications of bullish trends in the FET price, which is currently at…
0 notes
sokolygrandaananeva · 8 months ago
Text
The Noosphere: Merging Philosophy and Transhumanism
0 notes
thediffenblog · 1 year ago
Text
The 5 building blocks of intelligence
On a recent episode of the No Priors podcast, Zapier co-founder Mike Knoop said:
The consensus definition of AGI (artificial general intelligence) these days is: "AGI is a system that can do the majority of economically useful work that humans can do."
He believes this definition is incorrect.
He believes that François Chollet's definition of general intelligence is the correct one: "a system that can effectively, efficiently acquire new skills and can solve open-ended problems with that ability."
François Chollet is the creator of the Keras library for ML. He wrote a seminal paper, "On the Measure of Intelligence," and designed the Abstraction and Reasoning Corpus for Artificial General Intelligence (ARC-AGI) challenge. It's a great challenge -- you should play with it at https://arcprize.org/play to see the kinds of problems they expect "true AGI" to be able to solve.
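For a sense of what the challenge looks like under the hood, each ARC task is a small JSON document of paired input/output grids; the sketch below uses an invented two-by-two task (the real format is the same shape, but these grids and the rule are made up):

```python
# A sketch of the ARC task format: each task is a JSON document with "train"
# demonstration pairs and "test" pairs to solve. This inline task is made up
# for illustration; the real puzzles live at arcprize.org.
import json

task = json.loads("""
{
  "train": [
    {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
    {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]}
  ],
  "test": [
    {"input": [[3, 0], [0, 3]], "output": [[0, 3], [3, 0]]}
  ]
}
""")

# A solver must infer the rule (here: mirror each row) from the train pairs
# alone, then apply it to each test input.
for pair in task["train"]:
    print(pair["input"], "->", pair["output"])
```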
Unlike other benchmarks where AI is either close to or has already surpassed human-level performance, ARC-AGI has proven to be difficult for AI to make much progress on.
Tumblr media
Does that mean Chollet's definition of general intelligence is correct and ARC-AGI is the litmus test for true AGI?
With all due respect to Chollet (who is infinitely smarter than me; I didn't get very far in solving those puzzles myself), I feel that his definition is a little reductive and fails to recognize all aspects of intelligence.
General intelligence
There is already smarter-than-human AI for specific skills like playing chess or Go, or predicting how proteins fold. These systems are intelligent, but theirs is not general intelligence. General intelligence applies across a wide range of tasks and environments, rather than being specialized for a specific domain.
What, other than the following, is missing from the definition of general intelligence?
Ability to learn new skills
Ability to solve novel problems that weren't part of the training set
Applies across a range of tasks and environments
In this post, I submit that there are other aspects that are the building blocks of intelligence. In fact, these aspects can be and are being worked on independently, and they will be milestones on the path to AGI.
Aspects of Intelligence #1: Priors - Language and World Knowledge
Priors are the existing knowledge a system (or human) already has that allows it to solve problems. In the ARC challenge, the priors are listed as:
Objectness: Objects persist and cannot appear or disappear without reason. Objects can interact or not depending on the circumstances.
Goal-directedness: Objects can be animate or inanimate. Some objects are "agents" - they have intentions and they pursue goals.
Numbers & counting: Objects can be counted or sorted by their shape, appearance, or movement using basic mathematics like addition, subtraction, and comparison.
Basic geometry & topology: Objects can be shapes like rectangles, triangles, and circles which can be mirrored, rotated, translated, deformed, combined, repeated, etc. Differences in distances can be detected.

ARC-AGI avoids a reliance on any information that isn't part of these priors, for example acquired or cultural knowledge, like language.
However, I submit that any AGI whose priors do not include language should be ruled out because:
Humans cannot interact with this AGI and present it novel problems to solve without the use of language.
It is not sufficient for an AGI to solve problems. It must be able to explain how it arrived at the solution. The AGI cannot explain itself to humans without language.
In addition to language, there is a lot of world knowledge that would be necessary for a generally intelligent system. You could argue that an open system that has the ability to look up knowledge from the Internet (i.e., do research) does not need this. But even basic research requires a certain amount of fundamental knowledge plus good judgment on which source is trustworthy. So, knowledge about the fundamentals of all disciplines is a prerequisite for AGI.
I believe that a combination of LLMs and multi-modal transformer models like those being trained by Tesla on car driving videos will solve this part of the problem.
Aspects of Intelligence #2: Comprehension
It takes intelligence to understand a problem. Understanding language is a necessary but not sufficient condition for this. For example, you may understand language but it requires higher intelligence to understand humor. As every stand-up comedian knows, not everyone in the audience will get every joke.
Presented with a novel problem, two systems might both fail to solve it. That failure alone does not prove that neither system is intelligent: one system may at least comprehend the problem while the other fails even to understand it.
Measuring this is tricky, though. How do you differentiate between a system that truly understands the problem and one that bullshits and parrots its way into leading you to believe that it understands? While tricky, I do think it is possible to quiz the system on aspects of the problem to gauge its ability to comprehend it: e.g., ask it to break the problem down into components, identify the most challenging components, or come up with hypotheses or directions for the solution. This is similar to a software developer interview, where you can gauge the difference between a candidate who at least understands what you are asking and can give some directionally correct answers, and one who cannot, even if neither arrives at the right answer.
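A minimal sketch of that kind of quiz, with `ask` as a stand-in for whatever model call is actually in use; the problem text and the probes are invented for illustration:

```python
# A sketch of comprehension probes: quiz the system on the structure of a
# problem rather than only grading its final answer. `ask` is a stand-in for
# a real model call; the problem and probes are invented for illustration.
PROBLEM = "Design a rate limiter for an API shared by thousands of clients."

PROBES = [
    "Break this problem down into its main components.",
    "Which component is the most challenging, and why?",
    "Suggest two directions for a solution and their trade-offs.",
]

def ask(prompt: str) -> str:
    # Stand-in: wire this up to an actual model endpoint to run the quiz.
    return f"(model response to: {prompt!r})"

for probe in PROBES:
    print(ask(f"{PROBLEM}\n\n{probe}"))
```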
Comprehension also becomes obvious as a necessary skill when you consider that it's the only way the system will know whether it has successfully solved the problem.
Aspects of Intelligence #3: Simplify and explain
This is the flip side of comprehension. One of the hallmarks of intelligence is being able to understand complex things and explain them in a simple manner. Filtering out extraneous information is a skill necessary for both comprehension and good communication.
A system can be trained to simplify and explain by giving it examples of problems, solutions and explanations. Given a problem and a solution, the task of the system -- i.e. the expected output from the system -- is the explanation for how to arrive at the solution.
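Concretely, each training example might be shaped like the following sketch; the contents are invented for illustration:

```python
# A sketch of the training-example shape described above: the problem and its
# solution are the input, and the explanation is the training target. The
# contents are invented for illustration.
example = {
    "input": {
        "problem": "Why does ice float on water?",
        "solution": "Ice is less dense than liquid water.",
    },
    "target_explanation": (
        "Freezing water forms an open lattice of hydrogen bonds, so ice "
        "takes up more volume per molecule, is less dense, and floats."
    ),
}
print(example["target_explanation"])
```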
Aspects of Intelligence #4: Asking the right questions
Fans of Douglas Adams already know that the answer to life, the universe and everything is 42. The question, however, is unknown.
“O Deep Thought computer," he said, "the task we have designed you to perform is this. We want you to tell us…." he paused, "The Answer."
"The Answer?" said Deep Thought. "The Answer to what?"
"Life!" urged Fook.
"The Universe!" said Lunkwill.
"Everything!" they said in chorus.
Deep Thought paused for a moment's reflection. "Tricky," he said finally.
"But can you do it?"
Again, a significant pause. "Yes," said Deep Thought, "I can do it."
Given an ambiguous problem, an intelligent entity asks great questions to make progress. In an interview, you look for the candidate to ask great follow-up questions if your initial problem is ambiguous. An AGI system does not require its human users to give it complete information in a well-formatted, fully descriptive prompt input.
In order to be able to solve problems, an AGI will need to consistently ask great questions.
Aspects of Intelligence #5: Tool use
An intelligent system can both build and use tools. It knows which tools it has access to, and it can figure out which is the right tool for a job and when building a new tool is warranted. It is a neural net that can grow other neural nets because it knows how to. It has the ability and resources to spawn clones of itself (a la Agent Smith from The Matrix) if necessary to act as tools or "agents".
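A toy sketch of that idea: a registry of tools, a routing rule, and the option to register newly built tools. The specific tools and the routing heuristic here are assumptions for illustration only:

```python
# A toy sketch of tool use: a registry of tools, a routing rule, and the
# option to register newly built tools. The tools and the routing heuristic
# are assumptions for illustration only.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),  # toy arithmetic tool
    "echo": lambda text: text,
}

def pick_tool(task: str) -> str:
    # Toy routing rule; a real system would reason over tool descriptions.
    return "calculator" if any(ch.isdigit() for ch in task) else "echo"

def run(task: str) -> str:
    return TOOLS[pick_tool(task)](task)

# "Building" a new tool amounts to extending the registry the system knows.
TOOLS["shout"] = lambda text: text.upper()

print(run("2 + 2"))        # routed to the calculator -> "4"
print(run("hello world"))  # routed to echo
```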
This ability requires a level of self-awareness, not in the sense of sentience but in the sense of the system understanding its own inner workings, so that it understands its constraints and knows how it can integrate new subsystems into itself when needed to solve a problem. Just as Deep Thought built a computer smarter than itself to find the question to the ultimate answer, a task that Deep Thought itself was unable to perform:
"I speak of none other than the computer that is to come after me," intoned Deep Thought, his voice regaining its accustomed declamatory tones. "A computer whose merest operational parameters I am not worthy to calculate - and yet I will design it for you. A computer which can calculate the Question to the Ultimate Answer, a computer of such infinite and subtle complexity that organic life itself shall form part of its operational matrix.
BONUS: 1 more building block -- for superintelligence
In addition to the five building blocks above, I believe there is one more that is needed if a system is to become superintelligent (beyond human-level intelligence).
Aspects of Intelligence #6: Creative spark
What do the following have in common?
The discovery (or invention?) of the imaginary unit i, the square root of negative one.
Einstein's thought experiments, such as imagining riding alongside a beam of light, which led him to develop the special theory of relativity.
Archimedes' eureka moment while taking a bath when he realized that the volume of water displaced by an object is equal to the volume of the object itself.
Newton watching an apple fall from a tree and wondering if this is the same force that keeps the moon in orbit around the Earth.
Friedrich August Kekulé having a dream of a snake biting its own tail, leading to the discovery of benzene's ring structure.
Niels Bohr proposing that electrons travel in specific orbits around the nucleus and can jump between these orbits (quantum leaps) by absorbing or emitting energy, which explained atomic spectra, something classical physics could not.
Nikola Tesla designing the Alternating Current system to efficiently transmit electricity over large distances, and designing the induction motor to use alternating current.
In all these cases, a spark of creativity and imagination led to new advances in knowledge that were not straightforward extensions of what was known at the time.
Most scientists and engineers spend their entire careers without such a groundbreaking insight. So this is not strictly necessary for general intelligence. But for beyond-human-level intelligence, the system must be capable of thinking outside the box.
References
On the Measure of Intelligence - François Chollet
Puzzles in the evaluation set for the $1 million ARC Prize
ARC Prize
ChatGPT is Bullshit - Michael Townsen Hicks, James Humphries, Joe Slater
Stochastic parrot - Wikipedia
1 note · View note
bettreworld · 1 year ago
Video
youtube
Going Beyond AGI: Wise AI & Superwisdom with Oliver Klingefjord | Benevo...
0 notes
headlinehorizon · 2 years ago
Text
OpenAI and Microsoft Collaborate to Pursue Superintelligence: Headline Horizon
Discover the latest news in the realm of artificial intelligence as OpenAI CEO Sam Altman teams up with Microsoft to achieve superintelligence and develop advanced AI technologies. Learn more about their ambitious goals and the potential impact on the future.
0 notes
ixnai · 5 days ago
Text
AI is not a panacea. This assertion may seem counterintuitive in an era where artificial intelligence is heralded as the ultimate solution to myriad problems. However, the reality is far more nuanced and complex. AI, at its core, is a sophisticated algorithmic construct, a tapestry of neural networks and machine learning models, each with its own limitations and constraints.
The allure of AI lies in its ability to process vast datasets with speed and precision, uncovering patterns and insights that elude human cognition. Yet, this capability is not without its caveats. The architecture of AI systems, often built upon layers of deep learning frameworks, is inherently dependent on the quality and diversity of the input data. This dependency introduces a significant vulnerability: bias. When trained on skewed datasets, AI models can perpetuate and even exacerbate existing biases, leading to skewed outcomes that reflect the imperfections of their training data.
Moreover, AI’s decision-making process, often described as a “black box,” lacks transparency. The intricate web of weights and biases within a neural network is not easily interpretable, even by its creators. This opacity poses a challenge for accountability and trust, particularly in critical applications such as healthcare and autonomous vehicles, where understanding the rationale behind a decision is paramount.
The computational prowess of AI is also bounded by its reliance on hardware. The exponential growth of model sizes, exemplified by transformer architectures like GPT, demands immense computational resources. This requirement not only limits accessibility but also raises concerns about sustainability and energy consumption. The carbon footprint of training large-scale AI models is non-trivial, challenging the narrative of AI as an inherently progressive technology.
Furthermore, AI’s efficacy is context-dependent. While it excels in environments with well-defined parameters and abundant data, its performance degrades in dynamic, uncertain settings. The rigidity of algorithmic logic struggles to adapt to the fluidity of real-world scenarios, where variables are in constant flux and exceptions are the norm rather than the exception.
In conclusion, AI is a powerful tool, but it is not a magic bullet. It is a complex, multifaceted technology that requires careful consideration and responsible deployment. The promise of AI lies not in its ability to solve every problem, but in its potential to augment human capabilities and drive innovation, provided we remain vigilant to its limitations and mindful of its impact.
3 notes · View notes
collapsedsquid · 1 year ago
Text
Tumblr media
Shit man I gotta apologize to Yud, artificial superintelligence is already here, this bot knows what's up.
340 notes · View notes